In computer architecture, a bus (historically also called a data highway or databus) is a communication system that transfers data between components inside a computer or between computers. It encompasses both hardware (e.g., wires, optical fiber) and software, including communication protocols. At its core, a bus is a shared physical pathway, typically composed of wires, traces on a circuit board, or busbars, that allows multiple devices to communicate. To prevent conflicts and ensure orderly data exchange, buses rely on a communication protocol to manage which device can transmit data at a given time.
Buses are categorized based on their role, such as system buses (also known as internal buses, internal data buses, or memory buses) connecting the CPU and memory. Expansion buses, also called peripheral buses, extend the system to connect additional devices, including peripherals. Examples of widely used buses include PCI Express (PCIe) for high-speed internal connections and Universal Serial Bus (USB) for connecting external devices.
Modern buses utilize both parallel and serial communication, employing advanced encoding methods to maximize speed and efficiency. Features such as direct memory access (DMA) further enhance performance by allowing data transfers directly between devices and memory without requiring CPU intervention.
Typically, two additional pins in the control bus, the row-address strobe (RAS) and the column-address strobe (CAS), are used to tell the DRAM whether the address bus is currently sending the first half of the memory address or the second half.
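As a rough illustration, the following C sketch splits a flat address into the row half sent while RAS is asserted and the column half sent while CAS is asserted; the 16-bit address and the even row/column split are assumptions chosen for the example, not the geometry of any particular DRAM part.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical geometry: a 16-bit address split evenly into
 * an 8-bit row and an 8-bit column. Real DRAM parts vary. */
#define COL_BITS 8
#define COL_MASK ((1u << COL_BITS) - 1)

/* Row half, driven onto the address pins while RAS is asserted. */
static uint8_t row_of(uint16_t addr) { return addr >> COL_BITS; }

/* Column half, driven onto the same pins while CAS is asserted. */
static uint8_t col_of(uint16_t addr) { return addr & COL_MASK; }

int main(void) {
    uint16_t addr = 0xBEEF;
    printf("address 0x%04X -> row 0x%02X (RAS), column 0x%02X (CAS)\n",
           addr, row_of(addr), col_of(addr));
    return 0;
}
```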
Historically, there were also some computers that were only able to address whole words, rather than individual bytes.
The transition from parallel to serial buses was made possible by Moore's law, which permitted the incorporation of SerDes (serializer/deserializer) circuits into the integrated circuits used in computers.
Computer network connections such as Ethernet are not generally regarded as buses, although the difference is largely conceptual rather than practical. An attribute generally used to characterize a bus is that power is provided by the bus for the connected hardware. This emphasizes the busbar origins of bus architecture as supplying switched or distributed power. It excludes, as buses, schemes such as serial RS-232, parallel Centronics, IEEE 1284 interfaces, and Ethernet, since these devices need separate power supplies. Universal Serial Bus devices may use the bus-supplied power, but often use a separate power source. This distinction is exemplified by a telephone system with a connected modem, where the RJ11 connection and its associated modulated signalling scheme are not considered a bus, and are analogous to an Ethernet connection. A phone line connection scheme is not considered to be a bus with respect to signals, but the central office uses buses for connections between phones.
However, this distinction (that power is provided by the bus) does not hold in many avionic systems, where data connections such as ARINC 429, ARINC 629, MIL-STD-1553B (STANAG 3838), and EFABus (STANAG 3910) are commonly referred to as data buses or, sometimes, databuses. Such avionic data buses are usually characterized by having several Line Replaceable Items/Units (LRI/LRUs) connected to a common, shared medium. They may, as with ARINC 429, be simplex, i.e. have a single source LRI/LRU, or, as with ARINC 629, MIL-STD-1553B, and STANAG 3910, be duplex, allowing all the connected LRI/LRUs to act, at different times (half duplex), as transmitters and receivers of data. (Avionic Systems Standardisation Committee, Guide to Digital Interface Standards For Military Avionic Applications, ASSC/110/6/2, Issue 2, September 2003.)
The frequency or speed of a bus is measured in hertz (Hz), for example megahertz (MHz), and gives the number of clock cycles per second; there can be one or more data transfers per clock cycle. A single transfer per clock cycle is known as single data rate (SDR), and two transfers per clock cycle as double data rate (DDR), although signalling other than SDR is uncommon outside of RAM; PCIe, for example, uses SDR. Each data transfer can carry multiple bits of data. The number of bits the bus transfers per clock cycle is called the width of the bus, and can be synonymous with the number of physical electrical conductors the bus has if each conductor transfers one bit at a time. The data rate in bits per second can be obtained by multiplying the number of bits per transfer by the frequency and by the number of transfers per clock cycle. Alternatively, a bus such as PCIe can use modulation or encoding such as PAM4, which groups 2 bits into symbols that are transferred instead of the individual bits, allowing an increase in data transfer speed without increasing the frequency of the bus. The effective or real transfer rate may be lower due to the overhead of line encoding such as 128b/130b (b for bit), in which 128 data bits are carried in 130 transmitted bits. The data transfer rate is also known as the bandwidth.
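To make the arithmetic concrete, the following C sketch computes a raw data rate as width times transfers per cycle times frequency, then applies a 128b/130b encoding factor; the figures used (a 64-bit-wide DDR bus clocked at 100 MHz) are invented for illustration, not taken from any real bus specification.

```c
#include <stdio.h>

int main(void) {
    /* Illustrative figures only, not a real bus specification. */
    double width_bits          = 64.0;   /* bits per transfer  */
    double transfers_per_cycle = 2.0;    /* 2 = DDR, 1 = SDR   */
    double frequency_hz        = 100e6;  /* 100 MHz clock      */

    /* Raw rate: bits/transfer x transfers/cycle x cycles/second. */
    double raw_bps = width_bits * transfers_per_cycle * frequency_hz;

    /* 128b/130b line encoding carries 128 data bits per 130 sent. */
    double effective_bps = raw_bps * 128.0 / 130.0;

    printf("raw:       %.3f Gbit/s\n", raw_bps / 1e9);
    printf("effective: %.3f Gbit/s\n", effective_bps / 1e9);
    return 0;
}
```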
Some processors use a dedicated wire for each bit of the address bus, data bus, and the control bus. For example, the 64-pin STEbus is composed of 8 physical wires dedicated to the 8-bit data bus, 20 physical wires dedicated to the 20-bit address bus, 21 physical wires dedicated to the control bus, and 15 physical wires dedicated to various power buses.
Bus multiplexing requires fewer wires, which reduced costs in many early microprocessors and DRAM chips. One common multiplexing scheme, address multiplexing, has already been mentioned. Another multiplexing scheme re-uses the address bus pins as the data bus pins, an approach used by conventional PCI and the 8086. The various serial buses can be seen as the ultimate limit of multiplexing, sending each of the address bits and each of the data bits, one at a time, through a single pin (or a single differential pair).
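As a toy model of that serial limiting case, the following C sketch "bit-bangs" a byte through a single data pin, most significant bit first; set_data_pin() and pulse_clock() are hypothetical stand-ins for a real pin driver.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for real pin drivers; here they just log the activity. */
static void set_data_pin(int level) { printf("%d", level); }
static void pulse_clock(void)       { /* clock edge would go here */ }

/* Shift one byte out through a single data pin, MSB first,
 * the way a fully serialized bus sends address and data bits. */
static void shift_out(uint8_t byte) {
    for (int i = 7; i >= 0; i--) {
        set_data_pin((byte >> i) & 1);
        pulse_clock();
    }
    printf("\n");
}

int main(void) {
    shift_out(0xA5);   /* prints 10100101 */
    return 0;
}
```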
One of the first complications was the use of interrupts. Early computer programs performed I/O by busy-waiting for the peripheral to become ready. This wasted time for programs that had other tasks to do; also, if the program attempted to perform those other tasks, it might take too long to check again, resulting in loss of data. Engineers thus arranged for the peripherals to interrupt the CPU. The interrupts had to be prioritized, because the CPU can only execute code for one peripheral at a time, and some devices are more time-critical than others.
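The contrast can be sketched in C as follows; the volatile flag stands in for a device status register and the handler for an interrupt service routine, both simplifications for the example rather than any particular platform's API.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stands in for a device status register; on real hardware it
 * would be set by the peripheral, not by the program itself. */
static volatile bool data_ready = false;

/* Polling: the CPU spins, doing nothing useful, until the
 * peripheral is ready; this is the busy-waiting described above. */
static void wait_by_polling(void) {
    while (!data_ready) {
        /* CPU time wasted here */
    }
    printf("polled: data ready\n");
}

/* Interrupt style: the peripheral invokes a handler when ready,
 * so the CPU is free to run other work in the meantime. */
static void on_data_ready(void) {
    printf("interrupt: data ready\n");
}

int main(void) {
    data_ready = true;   /* simulate the peripheral becoming ready */
    wait_by_polling();
    on_data_ready();     /* on real hardware, the ISR would fire */
    return 0;
}
```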
High-end systems introduced the idea of channel controllers, which were essentially small computers dedicated to handling the input and output of a given bus. IBM introduced these on the IBM 709 in 1958, and they became a common feature of their platforms. Other high-performance vendors like Control Data Corporation implemented similar designs. Generally, the channel controllers would do their best to run all of the bus operations internally, moving data when the CPU was known to be busy elsewhere if possible, and only using interrupts when necessary. This greatly reduced CPU load, and provided better overall system performance.
To provide modularity, memory and I/O buses can be combined into a unified system bus. In this case, a single mechanical and electrical system can be used to connect together many of the system components, or in some cases, all of them.
Later computer programs began to share memory common to several CPUs, and access to this memory bus had to be prioritized as well. The simple way to prioritize interrupts or bus access was with a daisy chain. In this case, signals naturally flow through the bus in physical or logical order, eliminating the need for complex scheduling.
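In software terms, a daisy chain behaves like a grant signal passed from device to device in fixed order, with the first requester consuming it, as in the following sketch; the device list and requesting flags are invented for the example.

```c
#include <stdbool.h>
#include <stdio.h>

struct device {
    const char *name;
    bool        requesting;   /* wants the bus this cycle */
};

/* Pass the grant down the chain; the first requesting device
 * consumes it, so devices nearer the arbiter win automatically. */
static int daisy_chain_grant(const struct device *chain, int n) {
    for (int i = 0; i < n; i++) {
        if (chain[i].requesting)
            return i;          /* grant stops propagating here */
    }
    return -1;                 /* no device requested the bus */
}

int main(void) {
    struct device chain[] = {
        { "disk controller", false },
        { "serial port",     true  },
        { "printer port",    true  },
    };
    int winner = daisy_chain_grant(chain, 3);
    if (winner >= 0)
        printf("bus granted to %s\n", chain[winner].name);
    return 0;
}
```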
Early microcomputer bus systems were essentially a passive backplane connected directly, or through buffer amplifiers, to the pins of the CPU. Memory and other devices would be added to the bus using the same address and data pins as the CPU itself used, connected in parallel. Communication was controlled by the CPU, which read and wrote data from the devices as if they were blocks of memory, using the same instructions, all timed by a central clock controlling the speed of the CPU. Still, devices could interrupt the CPU by signaling on separate CPU pins.
For instance, a disk drive controller would signal the CPU that new data was ready to be read, at which point the CPU would move the data by reading the memory location that corresponded to the disk drive. Almost all early microcomputers were built in this fashion, starting with the S-100 bus in the Altair 8800 computer system.
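In C, such memory-mapped access looks like an ordinary load through a pointer, as in the sketch below; the register addresses are hypothetical, and the code assumes a bare-metal environment in which hardware actually decodes them.

```c
#include <stdint.h>

/* Hypothetical addresses at which a disk controller's registers are
 * decoded; volatile tells the compiler each access really happens. */
#define DISK_STATUS  ((volatile uint8_t *)0xF000)
#define DISK_DATA    ((volatile uint8_t *)0xF001)
#define STATUS_READY 0x01

/* Read one byte from the drive using ordinary memory loads,
 * i.e. the same mov-style instructions used for RAM. */
uint8_t disk_read_byte(void) {
    while ((*DISK_STATUS & STATUS_READY) == 0)
        ;                      /* wait for the controller */
    return *DISK_DATA;         /* a plain memory read moves the data */
}
```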
In some instances, most notably the IBM PC, a similar physical architecture is employed, but the instructions that access peripherals (in and out) and memory (mov and others) were never made uniform; they generate distinct CPU signals that can be used to implement a separate I/O bus.
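On the x86, port I/O of this kind is typically reached from C through small inline-assembly wrappers such as the following; the sketch assumes GCC/Clang inline-assembly syntax and a privileged (bare-metal or kernel) context, since ordinary user programs cannot normally access I/O ports.

```c
#include <stdint.h>

/* Write one byte to an x86 I/O port: compiles to an `out`
 * instruction, which asserts the CPU's I/O (not memory) signals. */
static inline void outb(uint16_t port, uint8_t value) {
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Read one byte from an x86 I/O port via the `in` instruction. */
static inline uint8_t inb(uint16_t port) {
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}
```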
These simple bus systems had a serious drawback when used for general-purpose computers. All the equipment on the bus had to talk at the same speed, as it shared a single clock.
Increasing the speed of the CPU becomes harder, because the speed of all the devices must increase as well. When it is not practical or economical to have all devices as fast as the CPU, the CPU must either enter a wait state, or work at a slower clock frequency temporarily, to talk to other devices in the computer. While acceptable in embedded systems, this problem was not tolerated for long in general-purpose, user-expandable computers.
Such bus systems are also difficult to configure when constructed from common off-the-shelf equipment. Typically each added expansion card requires many jumpers in order to set memory addresses, I/O addresses, interrupt priorities, and interrupt numbers.
However, these newer systems shared one quality with their earlier cousins, in that everything on the bus had to talk at the same speed. While the CPU was now isolated and could increase its speed, CPUs and memory continued to increase in speed much faster than the buses they talked to. The result was that bus speeds were now much slower than what a modern system needed, and the machines were left starved for data. A particularly common example of this problem was that video cards quickly outran even newer bus systems like PCI, so computers began to include AGP just to drive the video card. By 2004, AGP had been outgrown again by high-end video cards and other peripherals, and was replaced by the new PCI Express bus.
An increasing number of external devices started employing their own bus systems as well. When disk drives were first introduced, they would be added to the machine with a card plugged into the bus, which is why computers have so many slots on the bus. But through the 1980s and 1990s, new systems like SCSI and IDE were introduced to serve this need, leaving most slots in modern systems empty. Today there are likely to be about five different buses in the typical machine, supporting various devices.
Buses such as Wishbone have been developed by the open source hardware movement in an attempt to further remove legal and patent constraints from computer design.
Compute Express Link (CXL) is an open standard interconnect for high-speed CPU-to-device and CPU-to-memory connections, designed to accelerate next-generation data center performance.